121 research outputs found

    The dynamics of poverty: Why don't "the poor" act collectively?

    Keywords: Hunger, Types of poverty, Escaping poverty, Poverty reduction, Housing support, Reasons for descent into poverty

    Understanding, measuring and utilizing social capital: clarifying concepts and presenting a field application from India

    Social capital is a resource: a propensity for mutually beneficial collective action that communities possess to different extents. Communities with high levels of social capital are able to act together collectively to achieve diverse common objectives. While the concept of social capital is valid universally, the measure of social capital will vary by context; it must be related in each case to the aspects of social relations that assist mutually beneficial collective action within that particular cultural context. A locally relevant scale of social capital was developed to assess whether and how social capital mattered for development performance in 69 north Indian villages. Variables corresponding to other bodies of explanation, including extent of commercialization, relative stratification, and relative need, were also examined, but a combination of high social capital and capable agency was found to associate most closely with high development performance. Agency is particularly important in situations where institutions that enable citizens to connect with the state and with markets are not available. The productivity of social capital is considerably reduced on account of this institutional gap in the middle. Development performance can be improved in these situations by adding to the stock of social capital and also by enhancing agency capacity.
    Keywords: Social capital, Collective action, Social networks, Development, Capacity

    Magic state distillation with punctured polar codes

    We present a scheme for magic state distillation using punctured polar codes. Our results build on recent work by Bardet et al. (ISIT, 2016), who discovered that polar codes can be described algebraically as decreasing monomial codes. Using this powerful framework, we construct tri-orthogonal quantum codes (Bravyi et al., PRA, 2012) that can be used to distill magic states for the T gate. An advantage of these codes is that they permit the use of the successive cancellation decoder, whose time complexity scales as O(N log(N)). We supplement this with numerical simulations for the erasure channel and dephasing channel. We obtain estimates for the dimensions and error rates of the resulting codes for block sizes up to 2^20 for the erasure channel and 2^16 for the dephasing channel. The dimension of the triply-even codes we obtain is shown to scale like O(N^0.8) for the binary erasure channel at noise rate 0.01 and O(N^0.84) for the dephasing channel at noise rate 0.001. The corresponding bit error rates drop to roughly 8×10^-28 for the erasure channel and 7×10^-15 for the dephasing channel, respectively.
    Comment: 18 pages, 4 figures
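The O(N log(N)) successive cancellation decoder rests on channel polarization, which is easy to see numerically for the erasure channel: one polar transform step turns a BEC with erasure probability p into a degraded synthesized channel with erasure probability 2p − p^2 and an upgraded one with p^2. The following is a minimal sketch of polarization only, not of the distillation scheme in the abstract; the function name and the 1e-9 "nearly noiseless" threshold are illustrative choices.

```python
# Channel polarization for the binary erasure channel (BEC): a toy
# illustration. One transform step maps erasure probability p to the
# pair (2p - p**2, p**2), i.e. one degraded and one upgraded channel.

def polarize(eps, n):
    """Return erasure probabilities of the 2**n synthesized channels."""
    probs = [eps]
    for _ in range(n):
        nxt = []
        for p in probs:
            nxt.append(2 * p - p * p)  # "minus" channel: degraded
            nxt.append(p * p)          # "plus" channel: upgraded
        probs = nxt
    return probs

if __name__ == "__main__":
    eps, n = 0.01, 10                    # noise rate from the abstract
    probs = polarize(eps, n)
    good = sum(p < 1e-9 for p in probs)  # nearly noiseless channels
    print(f"{good} of {2**n} synthesized channels are nearly noiseless")
```

At noise rate 0.01 most synthesized channels rapidly become either nearly perfect or nearly useless; selecting (or puncturing) channels by quality is what fixes the dimensions of the resulting codes.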

    Experimentally Testable Noncontextuality Inequalities Via Fourier-Motzkin Elimination

    Generalized noncontextuality, as defined by Spekkens, is an attempt to make precise the distinction between classical and quantum theories. It is a restriction on models used to reproduce the predictions of quantum mechanics. There has been considerable progress recently in deriving experimentally robust noncontextuality inequalities; the violation of these inequalities is a sufficient condition for an experiment to not admit a generalized noncontextual model. In this thesis, we present an algorithm to automate the derivation of noncontextuality inequalities. At the heart of this algorithm is a technique called Fourier-Motzkin elimination (abbrev. FM elimination). After a brief overview of the generalized notion of contextuality and FM elimination, we proceed to demonstrate how this algorithm works by using it to derive noncontextuality inequalities for a number of examples. Belinfante’s construction, presented first for the sake of pedagogy, is based on a simple proof by Belinfante demonstrating the invalidity of von Neumann’s notion of noncontextuality. We provide a quantum realization on a single qubit that can violate this inequality. We then go on to discuss paradigmatic proofs of contextuality such as the Peres-Mermin square, Mermin’s star, and the 18-vector construction, and use them to derive robust noncontextuality inequalities using the algorithm. We also show how one can generalize the noncontextuality inequalities obtained for the Peres-Mermin square to systems with continuous variables.
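Fourier-Motzkin elimination itself is mechanical: to eliminate a variable from a system of linear inequalities, pair each inequality where it has a positive coefficient with each where it has a negative coefficient, scaled so the variable cancels, and keep the inequalities that do not involve it. A minimal sketch in exact rational arithmetic follows; it is a generic textbook FM eliminator, not the thesis's implementation.

```python
from fractions import Fraction

def fm_eliminate(rows, j):
    """Eliminate variable j from the system {a . x <= b}.

    Each row is (coeffs, b). Rows where x_j is positive are combined with
    rows where it is negative, scaled so x_j cancels; rows not involving
    x_j pass through. Column j is dropped from the output."""
    pos = [r for r in rows if r[0][j] > 0]
    neg = [r for r in rows if r[0][j] < 0]
    zero = [r for r in rows if r[0][j] == 0]
    out = [(a[:j] + a[j + 1:], b) for a, b in zero]
    for ap, bp in pos:
        for an, bn in neg:
            # (1/ap[j]) * pos_row + (-1/an[j]) * neg_row has zero x_j term.
            a = [x / ap[j] - y / an[j] for x, y in zip(ap, an)]
            out.append((a[:j] + a[j + 1:], bp / ap[j] - bn / an[j]))
    return out

if __name__ == "__main__":
    F = Fraction
    # Project the triangle {x >= 0, y >= 0, x + y <= 1} onto the x-axis
    # by eliminating y (variable index 1); the result is 0 <= x <= 1.
    rows = [([F(-1), F(0)], F(0)),   # -x     <= 0
            ([F(0), F(-1)], F(0)),   #     -y <= 0
            ([F(1), F(1)], F(1))]    #  x + y <= 1
    print(fm_eliminate(rows, 1))
```

Repeating this step over all unwanted variables projects a polytope onto the observable quantities, which is how the inequalities in the thesis are obtained; the number of rows can grow quickly, so practical implementations prune redundant inequalities between steps.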

    Fault-tolerant gates on hypergraph product codes

    One of the most exciting challenges facing us today is the prospect of building a scalable quantum computer. Quantum information is fragile, and implementations of quantum circuits are imperfect and prone to error. In order to realize a scalable quantum computer, we need to construct fault-tolerant quantum circuits capable of working in the real world. As will be explained further below, fault-tolerant quantum circuits require more resources than their ideal, noise-free counterparts. Broadly, the aim of my research is to minimize the resources required to construct a reliable quantum circuit.
    Quantum error correcting codes protect information from errors by encoding it redundantly across many qubits. Although this redundancy increases the number of qubits required, it serves as a buffer: in the event that some qubits are damaged because of a faulty circuit, we will still be able to recover our information. Preparing and maintaining qubits for durations long enough to perform a computation has proved to be a challenging experimental task. There is a large gap between the number of qubits we can control in the lab and the number required to implement algorithms where quantum computers have the upper hand over classical ones. Therefore, if we want to circumvent this bottleneck, we need to make fault-tolerant quantum circuits as efficient as possible. To be precise, we need to minimize the overhead, defined as the number of physical qubits required to construct a logical qubit.
    In an important paper, Gottesman showed that if certain kinds of quantum error correcting codes were to exist, they could lead to constructions of fault-tolerant quantum circuits with favorable overhead. These codes are called quantum low-density parity-check (LDPC) codes. Gottesman’s proposal described techniques to perform gates on generic quantum LDPC codes, but it was limited in that only a constant number of logical gates could be performed per unit time. In this thesis, we work with a specific class of quantum LDPC codes called hypergraph product codes. We demonstrate how to perform gates on these codes using a technique called code deformation. Our technique generalizes defect-based encodings in the surface code to hypergraph product codes. We generalize puncture defects and show that they can be expressed naturally in hypergraph product codes. As will be explained in detail, puncture defects are themselves limited in scope; they only permit a limited set of gates. To perform a larger class of gates, we introduce a novel defect called a wormhole that is based on punctures. As an example, we illustrate how this defect works in the context of the surface code. This defect has a few key features. First, it preserves the LDPC property of the code over the course of code deformation, a property that a naive approach does not guarantee. Second, it generalizes in a straightforward way to hypergraph product codes. This is the first framework rich enough to describe fault-tolerant gates on this class of codes. Finally, we circumvent the limitation of Gottesman’s approach that only allowed a limited number of logical gates at any given time: our proposal allows operating on the entire code block at any given time.
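The hypergraph product construction itself (due to Tillich and ZĂ©mor) is compact: from two classical parity-check matrices it builds the X- and Z-type stabilizer matrices of a CSS code, and the CSS commutation condition Hx·Hz^T = 0 (mod 2) follows from the mixed-product property of the Kronecker product. A minimal sketch of the code construction only (not of the defect-based gates of the thesis):

```python
import numpy as np

def hypergraph_product(H1, H2):
    """Build the CSS stabilizer matrices of the hypergraph product of two
    classical codes with parity-check matrices H1 (m1 x n1) and
    H2 (m2 x n2), over GF(2). Qubits: n1*n2 + m1*m2."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    Hx = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)]) % 2
    Hz = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))]) % 2
    return Hx, Hz

if __name__ == "__main__":
    # Parity checks of the 3-bit repetition code; the hypergraph product
    # of this code with itself is a small (13-qubit) surface code.
    H = np.array([[1, 1, 0], [0, 1, 1]])
    Hx, Hz = hypergraph_product(H, H)
    # CSS condition: every X-type check commutes with every Z-type check.
    assert not ((Hx @ Hz.T) % 2).any()
    print(Hx.shape, Hz.shape)   # each acts on 3*3 + 2*2 = 13 qubits
```

That the surface code appears as the product of two repetition codes is what makes the defect-based surface-code techniques in the thesis a natural starting point for generalization.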

    Pathways out of Poverty in Western Kenya and the Role of Livestock

    The objectives of the study were to obtain a better understanding of households’ pathways into, and out of, poverty, with poverty defined from the communities’ own perspective. The authors used a community-based methodology called the ‘stages of progress’ approach to assess household poverty dynamics in 20 communities and for over 1,700 households representing two different ethnic groups in Western Kenya. The proportion of households that had managed to escape poverty over the last 25 years was ascertained, as well as the proportion of households that had fallen into poverty during the same period. The major reasons for movements into or out of poverty were elicited at both the community and household level, and in particular, the role that livestock play in the different pathways was examined. The results show considerable movement by households in the study region over the last 2œ decades, both into and out of poverty. The main reasons behind households’ escape from poverty are completely different from (i.e. not merely the opposite of) the reasons for descent into poverty, and hence have different policy implications in terms of what has been referred to as ‘cargo net’ versus ‘safety net’ interventions: cargo nets help poor people climb out of poverty, while safety nets stop people from falling into poverty. Redistributive programs that build up the assets of poor people (such as giving heifers to poor households) may be effective in achieving long-term reductions in chronic poverty, but will have to be complemented by safety net policies.
    Keywords: Poverty, livestock, Western Kenya, Vihiga District, Siaya District, stages of progress, Food Security and Poverty, Livestock Production/Industries

    Combining hard and soft decoders for hypergraph product codes

    Hypergraph product codes are a class of constant-rate quantum low-density parity-check (LDPC) codes equipped with a linear-time decoder called small-set-flip (SSF). This decoder displays sub-optimal performance in practice and requires very large error correcting codes to be effective. In this work, we present new hybrid decoders that combine the belief propagation (BP) algorithm with the SSF decoder. We present the results of numerical simulations in which codes are subject to independent bit-flip and phase-flip errors. We provide evidence that the threshold of these codes is roughly 7.5% assuming ideal syndrome extraction, and remains close to 3% in the presence of syndrome noise. This result subsumes and significantly improves upon earlier work by Grospellier and Krishna (arXiv:1810.03681). The low complexity and high performance of these heuristic decoders suggest that decoding should not be a substantial difficulty when moving from zero-rate surface codes to constant-rate LDPC codes, and give a further hint that such codes are well worth investigating in the context of building large universal quantum computers.
    Comment: 17 pages, 4 figures. Comments welcome
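Neither BP nor SSF is reproduced here, but the flavor of syndrome-driven iterative decoding they share can be shown with a much simpler classical bit-flip decoder: repeatedly flip the bits touching the most unsatisfied checks until the syndrome vanishes. This toy sketch is illustrative only; the function name and the flip rule are our choices, not the paper's.

```python
import numpy as np

def bitflip_decode(H, y, max_iter=50):
    """Toy hard-decision iterative decoder for a classical binary code
    with parity-check matrix H: flip the bits involved in the largest
    number of unsatisfied checks until all checks are satisfied."""
    y = y.copy() % 2
    for _ in range(max_iter):
        syndrome = (H @ y) % 2
        if not syndrome.any():
            break                        # all checks satisfied
        unsat = H.T @ syndrome           # unsatisfied checks per bit
        y[unsat == unsat.max()] ^= 1     # flip the worst offenders
    return y

if __name__ == "__main__":
    # Parity checks of the 3-bit repetition code {000, 111}.
    H = np.array([[1, 1, 0], [0, 1, 1]])
    print(bitflip_decode(H, np.array([1, 1, 0])))  # corrects to [1 1 1]
```

BP replaces these hard flips with soft probabilistic messages passed on the Tanner graph, while SSF flips small sets of qubits at once; the hybrid decoders of the abstract combine the two.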
    • 

    corecore